
fix(cache): reduce tx cache retention#3299

Merged
tac0turtle merged 3 commits into main from julien/cache-mem
Apr 28, 2026

Conversation

@julienrbrt
Member

@julienrbrt julienrbrt commented Apr 28, 2026

Overview

Extract and tweak some useful improvements from #3290

Summary by CodeRabbit

  • Bug Fixes
    • Reduced transaction cache retention period and optimized cleanup frequency to prevent out-of-memory conditions during heavy transaction load, improving system stability.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 28, 2026

📝 Walkthrough

Walkthrough

This PR reduces transaction cache retention from 24 hours to 30 minutes to prevent out-of-memory conditions during heavy transaction load, and adjusts the cleanup frequency to run at one-tenth the retention interval with a 15-second minimum.

Changes

Cohort / File(s) Summary
Documentation
CHANGELOG.md
Added changelog entry documenting the transaction cache retention reduction.
Cache Configuration
block/internal/cache/manager.go
Reduced DefaultTxCacheRetention constant from 24 hours to 30 minutes to limit memory usage.
Cleanup Scheduling
block/internal/reaping/reaper.go
Changed cleanup ticker from fixed 1-hour interval to dynamic interval calculated as DefaultTxCacheRetention/10 with a 15-second minimum bound.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Suggested reviewers

  • tac0turtle

Poem

🐰 Cache shrinks from day to night,
Thirty minutes feels just right!
Memory breathes, no overflow,
Cleanup dances, swift and low. ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Description check (⚠️ Warning): The description is incomplete. It references an external PR but lacks specific details about the changes made, the rationale for the reduction, and the impact on system behavior. Resolution: expand the description to explain why cache retention is being reduced (OOM prevention), what the new retention period is, and how the cleanup interval was adjusted.
✅ Passed checks (4 passed)
  • Title check (✅ Passed): The title 'fix(cache): reduce tx cache retention' directly and clearly describes the main change: reducing transaction cache retention time to prevent OOM conditions.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.




@claude
Contributor

claude Bot commented Apr 28, 2026

Claude finished @julienrbrt's task in 1m 22s


Code Review

  • Gather context and understand the PR
  • Read changed files
  • Analyze code quality, architecture, and correctness
  • Post review feedback

This is a small, focused fix to reduce memory pressure from the transaction seen-hash cache. The changes are clean. A few observations:


block/internal/cache/manager.go

DefaultTxCacheRetention: 24h → 30 min

The reduction is drastic (48×). The fix is correct for the OOM problem, but the tradeoff should be called out explicitly: the deduplication window is now 30 minutes. If a node restarts or is offline for >30 minutes and has already submitted transactions to the sequencer that haven't been included in a block yet, those txs can be re-submitted, potentially producing duplicates in the sequencer queue. This is likely an acceptable tradeoff (duplicates are better than OOMs), but it's worth documenting whether the sequencer handles idempotency, or whether this window should be configurable.


block/internal/reaping/reaper.go

CleanupInterval = max(cache.DefaultTxCacheRetention/10, 15*time.Second)

This is a nice improvement over the previous hardcoded 1h — it creates a proportional invariant (cleanup runs ~10× per retention window). With 30 min retention, this evaluates to 3 min, which is much more responsive than the old 1 hour.

The 15*time.Second floor is a sensible safeguard. One minor note: this constant is evaluated at compile time using a const-level expression involving max over time.Duration values — this is valid Go 1.21+ and works correctly.

Cleanup call in drainMempool

// block/internal/reaping/reaper.go:161
removed := r.cache.CleanupOldTxs(cache.DefaultTxCacheRetention)

This explicitly passes cache.DefaultTxCacheRetention to CleanupOldTxs, even though CleanupOldTxs already falls back to DefaultTxCacheRetention when given a value ≤ 0. This is fine: passing 0 versus passing the constant explicitly is a style choice, and being explicit is arguably clearer here.


Testing

No tests were added or updated. Given that the change reduces an important operational parameter by 48×, it would be worth having at least one test asserting that entries older than DefaultTxCacheRetention are evicted by CleanupOldTxs and that newer entries are not. This guards against regressions if the constant is changed again.


Summary

The changes are correct, minimal, and well-reasoned. The CleanupInterval formula is a nice touch. The main design question is whether 30 minutes is the right floor for all deployments or whether this should be made configurable. For now, the constant approach is pragmatic and solves the immediate issue.

@julienrbrt julienrbrt marked this pull request as draft April 28, 2026 18:06
@github-actions
Contributor

github-actions Bot commented Apr 28, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed | Format: ⏩ skipped | Lint: ✅ passed | Breaking: ✅ passed | Updated (UTC): Apr 28, 2026, 6:12 PM

@codecov

codecov Bot commented Apr 28, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 62.50%. Comparing base (389e904) to head (0311231).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3299      +/-   ##
==========================================
+ Coverage   62.48%   62.50%   +0.02%     
==========================================
  Files         122      122              
  Lines       13047    13047              
==========================================
+ Hits         8152     8155       +3     
+ Misses       4009     4008       -1     
+ Partials      886      884       -2     
Flag Coverage Δ
combined 62.50% <ø> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.


@julienrbrt julienrbrt marked this pull request as ready for review April 28, 2026 18:52
@julienrbrt julienrbrt requested a review from tac0turtle April 28, 2026 18:52
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
block/internal/cache/manager.go (1)

29-30: Consider making tx retention configurable instead of fixed.
At Line 30, a single hardcoded retention may be too aggressive or too lenient across different networks/workloads. Keeping 30m as default but exposing config would give operators safer tuning control.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@block/internal/cache/manager.go` around lines 29 - 30, Replace the hardcoded
DefaultTxCacheRetention with a configurable retention by keeping
DefaultTxCacheRetention as the default value but adding a configurable
field/parameter (e.g., TxCacheRetention) to the cache manager config or
constructor; update the cache manager initialization (where New/Init for the
manager is called) to accept an optional retention value (read from a config
struct or env flag) and fall back to DefaultTxCacheRetention when unset, and
ensure any uses of DefaultTxCacheRetention (symbol DefaultTxCacheRetention) are
switched to reference the configured retention field so operators can tune
retention per deployment.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e5c5a916-8385-4d53-aecb-fa45a37152db

📥 Commits

Reviewing files that changed from the base of the PR and between 389e904 and 0311231.

📒 Files selected for processing (3)
  • CHANGELOG.md
  • block/internal/cache/manager.go
  • block/internal/reaping/reaper.go

@tac0turtle tac0turtle merged commit 81c1c25 into main Apr 28, 2026
35 of 36 checks passed
@tac0turtle tac0turtle deleted the julien/cache-mem branch April 28, 2026 19:17
julienrbrt added a commit that referenced this pull request May 8, 2026